Building Life-like 'Conscious' Software Agents
Author
Stan Franklin

Abstract
Here we'll briefly describe the action selection and language generation mechanisms in two "life-like" software agents, CMattie and IDA, and discuss issues that bear on architectures for behavior control, the interdependencies between emotions and goal-based behaviors, and the coordination of scripted and improvised behaviors. These agents are life-like in the sense of interacting with humans via email in natural language. They are "conscious" only in the sense of implementing a psychological theory of consciousness (Baars 1988, 1997). At this writing we are exploring the transition from scripted language production to more improvised speech generation. We are also investigating deliberative behavior selection mechanisms whereby alternative scenarios are produced and evaluated, and one of them is chosen and acted upon.

1. Supported in part by ONR grant N00014-98-1-0332.
2. With essential contributions from the Conscious Software Research Group, including Art Graesser, Satish Ambati, Ashraf Anwar, Myles Bogner, Arpad Kelemen, Irina Makkaveeva, Lee McCauley, Aregahegn Negatu, Uma Ramamurthy, and Zhaohua Zhang.
3. [email protected], www.msci.memphis.edu/~franklin

Agents: Autonomous, Cognitive and "Conscious"

The rise of the web has spawned a use of the word "agent" other than its more common meaning, as in "travel agent," "insurance agent," or "real-estate agent." In this new context an agent is a piece of software, a computer program, that in some sense acts on its own, typically in the service of its user, a person. Here I've chosen to define the technical term "autonomous agent," at least partly to avoid the seemingly endless debates over exactly what constitutes an agent. An autonomous agent, in this paper, is a system situated in, and part of, an environment, which senses that environment and acts on it, over time, in pursuit of its own agenda. It also acts in such a way as to possibly influence what it senses at a later time (Franklin and Graesser 1997).

These autonomous agents include biological agents such as humans and most, perhaps all, animals. They also include some mobile robots, like the robots that deliver pharmaceuticals in hospitals. Some computational agents, such as the many artificial life agents that "live" in artificial environments designed for them within computer systems (Ackley and Littman 1992), are also autonomous agents. Finally, so are the objects of our attention in this paper: software agents, at least some of them. These autonomous software agents include task-specific agents like spiders that search for links on the web, entertainment agents such as Julia (Mauldin 1994), and, much to the regret of many of us, computer viruses. The class also includes the "conscious" software agents we'll describe here.

But the notion of autonomous agent turns out to be too broad for our needs; it even includes a thermostat. So let's restrict it to make it more suitable. Suppose we equip our autonomous agent with cognitive features, interpreting "cognitive" broadly so as to include emotions and the like. Choose these features from among multiple senses, perception, short- and long-term memory, attention, planning, reasoning, problem solving, learning, emotions, moods, attitudes, multiple drives, etc., and call the resulting agent a cognitive agent (Franklin 1997). Cognitive agents would include humans, some of our primate relatives, and perhaps elephants, some cetaceans, and perhaps even Alex, an African grey parrot (Pepperberg 1994).
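The operational definition of an autonomous agent above amounts to a continual sense-select-act cycle. As a minimal sketch, here is what that cycle might look like in Java (the language the ConAg framework cited below is written in); the interface and its method names are illustrative assumptions of ours, not part of any published implementation.

```java
// Minimal, illustrative sketch of an autonomous agent: it senses its
// environment and acts on it, over time, in pursuit of its own agenda.
// P is the type of percepts, A the type of actions.
public interface AutonomousAgent<P, A> {

    /** Sense the current state of the environment. */
    P sense();

    /** Select an action in service of the agent's own agenda. */
    A selectAction(P percept);

    /** Act on the environment, possibly influencing what is sensed later. */
    void act(A action);

    /** One sense-select-act cycle, repeated over the agent's lifetime. */
    default void step() {
        act(selectAction(sense()));
    }
}
```

Note that `act` feeding back into later calls to `sense` is exactly the clause "acts in such a way as to possibly influence what it senses at a later time."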
Examples are rare among non-biological agents; they may eventually include Rod Brooks' humanoid robot Cog (Brooks 1997), some agents modeled on Sloman's architectural scheme (Sloman 1996), our own software agents VMattie, CMattie, and IDA, to be described below, and a handful of others.

Though quite ill defined, cognitive agents can play a useful, even synergistic, role in the study of human cognition, including consciousness. Here's how it can work. A theory of cognition constrains the design of a cognitive agent that implements that theory. While a theory is typically abstract, functional, and only broadly sketches an architecture, an implemented design must provide a fully articulated architecture and the mechanisms upon which it rests. This architecture and these mechanisms serve to flesh out the theory, making it more concrete. Also, every design decision taken during an implementation constitutes a hypothesis about how human minds work. The hypothesis says that humans do it the way the agent was designed to do it, whatever "it" was. These hypotheses will suggest experiments with humans by means of which they can be tested. Conversely, the results of such experiments will suggest corresponding modifications of the architecture and mechanisms of the cognitive agent implementing the theory. The concepts and methodologies of cognitive science and of computer science thus work synergistically to enhance our understanding of mechanisms of mind. I have written elsewhere in much more depth about this research strategy (Franklin 1997). A paper currently in preparation will detail hypotheses suggested by the CMattie model described below (Bogner, Franklin, Graesser and Baars, in preparation).

Here we'll be concerned with "conscious" software agents. These agents are cognitive agents in that they consist of modules for perception, action selection (including constraint satisfaction and deliberation), several working memories, associative memory, episodic memory, emotion, several kinds of learning, and metacognition. They model much of human cognition. But, in addition, these agents include a module that models human consciousness according to global workspace theory (Baars 1988, 1997). Our aim in this work is twofold: to produce a useful conceptual and computational model of human cognition and consciousness, and at the same time to produce more flexible, more human-like AI systems.

In global workspace theory, Baars postulates that human cognition is implemented by a multitude of relatively small, special-purpose processes, almost always unconscious. Communication between them is rare and over a narrow bandwidth. Coalitions of such processes find their way into a global workspace (and into consciousness).
This limited-capacity workspace serves to broadcast the message of the coalition to all the unconscious processors, in order to recruit other processors to join in handling the current novel situation, or in solving the current problem. Thus consciousness, according to this theory, allows us to deal with novel or problematic situations that can't be dealt with efficiently, or at all, by habituated unconscious processes. This is the key insight of global workspace theory. There's much, much more to it than is stated here, including volitional action via William James' ideomotor theory.
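As a rough illustration of this broadcast-and-recruit cycle, here is a deliberately simplified Java sketch. The Processor interface, the scalar activations, and the single-processor winner (standing in for a whole coalition) are all simplifying assumptions of ours; the actual mechanisms in CMattie and IDA, described in the papers below, are considerably richer.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of a global workspace broadcast cycle: many small,
// "unconscious" processors compete for the limited-capacity workspace, and
// the winner's message is broadcast to all processors so that relevant
// ones can join in handling the current novel situation.
final class GlobalWorkspaceSketch {

    interface Processor {
        double activation(String situation);    // how salient this processor finds the situation
        String message();                       // content it would place in the workspace
        void receiveBroadcast(String message);  // every processor hears each broadcast
    }

    private final List<Processor> processors = new ArrayList<>();

    void register(Processor p) {
        processors.add(p);
    }

    /** One "conscious" cycle: competition for the workspace, then a global broadcast. */
    void cycle(String situation) {
        processors.stream()
                .max(Comparator.comparingDouble((Processor p) -> p.activation(situation)))
                .ifPresent(winner -> {
                    String broadcast = winner.message();
                    for (Processor p : processors) {
                        p.receiveBroadcast(broadcast);  // recruitment: relevant processors respond
                    }
                });
    }
}
```

A processor whose relevance rises in response to a broadcast would compete more strongly on a later cycle, which is how recruitment plays out over time.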
Similar articles
ConAg: A Reusable Framework for Developing "Conscious" Software Agents

ConAg is a reusable framework, written in Java, for creating "conscious" software agents. The system provides an application skeleton that can be customized by developers. Its particular focus is on these agents' "consciousness" mechanism. A "conscious" software agent is a cognitive agent that integrates numerous artificial intelligence mechanisms to implement the global workspace theory, a psyc...
A Reusable Framework for Developing "Conscious" Software Agents

ConAg is a reusable framework, written in Java, for creating "conscious" software agents. Its particular focus is on these agents' "consciousness" mechanism. A "conscious" software agent is a cognitive agent that integrates numerous artificial intelligence mechanisms to implement Bernard Baars' global workspace theory, a psychological theory of mind. This article gives overviews of "conscious"...
Modeling Consciousness and Cognition in Software Agents
Here we describe the architectures of two “conscious” software agents and the relatively comprehensive conceptual and computational models derived from them. Modules for perception, working memory, associative memory, “consciousness,” emotions, action selection, deliberation, and metacognition are included. The mechanisms implementing the agents are mostly drawn from the “new AI,” but include b...
An Emotion-Based "Conscious" Software Agent Architecture
Evidence of the role of emotions in the action selection processes of environmentally situated agents continues to mount. This is no less true for autonomous software agents. Here we are concerned with such software agents that model a psychological theory of consciousness, global workspace theory. We briefly describe the architecture of two such agents, CMattie and IDA, and the role emotions p...
Metacognition in Software Agents Using Classifier Systems
Software agents “living” and acting in a real world software environment, such as an operating system, a network, or a database system, can carry out many tasks for humans. Metacognition is very important for humans. It guides people to select, evaluate, revise, and abandon cognitive tasks, goals, and strategies. Thus, metacognition plays an important role in human-like software agents. Metacog...
Journal: AI Commun.
Volume: 13
Publication year: 2000